
Alternating direction algorithms for $\ell_0$ regularization in compressed sensing

In this paper we propose three iterative greedy algorithms for compressed sensing, called \emph{iterative alternating direction} (IAD), \emph{normalized iterative alternating direction} (NIAD) and \emph{alternating direction pursuit} (ADP), which stem from the iteration steps of the alternating direction method of multipliers (ADMM) for $\ell_0$-regularized least squares ($\ell_0$-LS) and can be considered as the alternating direction versions of the well-known iterative hard thresholding (IHT), normalized iterative hard thresholding (NIHT) and hard thresholding pursuit (HTP), respectively. Firstly, in contrast to the general iteration steps of ADMM, the proposed algorithms carry no splitting or dual variables in their iterations, so the current approximation depends directly on past iterations. Secondly, provable theoretical guarantees are given in terms of the restricted isometry property; to the best of our knowledge, these are the first such guarantees for ADMM applied to $\ell_0$-LS. Finally, the proposed algorithms greatly outperform the corresponding IHT, NIHT and HTP when reconstructing both constant-amplitude signals with random signs (CARS signals) and Gaussian signals.
Comment: 16 pages, 1 figure
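For illustration, here is a minimal NumPy sketch of the classical IHT baseline that IAD, NIAD and ADP are compared against; the conservative step size and fixed iteration count are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

def iht(A, y, s, n_iter=200):
    """Classical iterative hard thresholding:
    x <- H_s(x + mu * A^T (y - A x)),
    where H_s keeps the s largest-magnitude entries and zeroes the rest."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + mu * A.T @ (y - A @ x)       # gradient step on 0.5*||y - Ax||^2
        small = np.argsort(np.abs(x))[:-s]   # all but the s largest entries
        x[small] = 0.0                       # hard thresholding H_s
    return x

# Recover a 5-sparse signal from 80 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, s=5)
```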

    Nonextensive information theoretical machine

In this paper, we propose a new discriminative model named \emph{nonextensive information theoretical machine} (NITM), based on a nonextensive generalization of Shannon information theory. In NITM, the weight parameters are treated as random variables: the Tsallis divergence is used to regularize their distribution, and the maximum unnormalized Tsallis entropy distribution is used to evaluate the fitting effect. On the one hand, it is shown that some well-known margin-based loss functions, such as the $\ell_{0/1}$ loss, hinge loss, squared hinge loss and exponential loss, can be unified by unnormalized Tsallis entropy. On the other hand, Gaussian prior regularization is generalized to Student-t prior regularization with similar computational complexity. The model can be solved efficiently by gradient-based convex optimization, and its performance is illustrated on standard datasets.
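For concreteness, below is an illustrative helper for the Tsallis divergence used as the regularizer, under the common convention $D_q(p\|r) = (\sum_i p_i^q r_i^{1-q} - 1)/(q-1)$, which recovers the Kullback-Leibler divergence as $q \to 1$; the exact normalization used in NITM may differ, so take this as a sketch.

```python
import numpy as np

def tsallis_divergence(p, r, q):
    """Tsallis relative entropy D_q(p || r) for strictly positive
    distributions; tends to KL(p || r) as q -> 1."""
    p, r = np.asarray(p, float), np.asarray(r, float)
    if np.isclose(q, 1.0):                 # q -> 1 limit: Kullback-Leibler
        return float(np.sum(p * np.log(p / r)))
    return float((np.sum(p**q * r**(1 - q)) - 1.0) / (q - 1.0))

p = np.array([0.7, 0.2, 0.1])
r = np.array([1/3, 1/3, 1/3])
print(tsallis_divergence(p, r, 0.999))  # nearly identical values:
print(tsallis_divergence(p, r, 1.0))    # the q -> 1 limit is KL(p || r)
```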

    Bayesian linear regression with Student-t assumptions

As an automatic method of determining model complexity from the training data alone, Bayesian linear regression provides a principled way to select hyperparameters. However, approximate inference is usually required once the distributional assumptions go beyond the Gaussian. In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS) that can be inferred exactly. In this framework, both the conjugate prior and the expectation-maximization (EM) algorithm are generalized. Meanwhile, we prove that the maximum likelihood solution is equivalent to that of standard Bayesian linear regression with Gaussian assumptions (BLRG). The $q$-EM algorithm for BLRS is nearly identical to the EM algorithm for BLRG, and it is shown that $q$-EM for BLRS can converge faster than EM for BLRG on the task of predicting online news popularity.
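The BLRG model referenced above has a closed-form posterior; as a point of comparison, here is a minimal sketch with fixed hyperparameters $\alpha$ and $\beta$ (the full method would update them, e.g., by EM, which is omitted here).

```python
import numpy as np

def blrg_posterior(Phi, y, alpha, beta):
    """Bayesian linear regression with Gaussian assumptions:
    prior w ~ N(0, alpha^{-1} I), likelihood y ~ N(Phi w, beta^{-1} I).
    Returns the posterior mean m and covariance S of the weights."""
    d = Phi.shape[1]
    S_inv = alpha * np.eye(d) + beta * Phi.T @ Phi   # posterior precision
    S = np.linalg.inv(S_inv)
    m = beta * S @ Phi.T @ y                         # posterior mean
    return m, S

rng = np.random.default_rng(1)
Phi = rng.standard_normal((50, 3))
y = Phi @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
m, S = blrg_posterior(Phi, y, alpha=1e-2, beta=100.0)
print(m)  # close to [1.0, -2.0, 0.5]
```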

    Deterministic Construction of Binary Measurement Matrices with Various Sizes

We introduce a general framework for deterministically constructing binary measurement matrices for compressed sensing. The proposed matrices are composed of (circulant) permutation submatrix blocks and zero submatrix blocks, which makes their hardware realization convenient and easy. Firstly, using the famous Johnson bound for binary constant-weight codes, we derive a new lower bound on the coherence of binary matrices with uniform column weight. Afterwards, a large class of binary base matrices whose coherence asymptotically achieves this new bound is presented. Finally, by choosing proper rows and columns from these base matrices, we construct the desired measurement matrices with various sizes; empirically, their performance is comparable to that of the corresponding Gaussian matrices.
Comment: 5 pages, 3 figures
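A toy sketch of the block construction and a coherence check follows; the circulant shifts below are arbitrary placeholders, whereas the paper selects them to approach the Johnson-type coherence bound.

```python
import numpy as np

def circulant_permutation(L, shift):
    """L x L circulant permutation matrix (identity cyclically shifted)."""
    return np.roll(np.eye(L, dtype=int), shift, axis=1)

def block_matrix(shifts, L):
    """Assemble a binary matrix from circulant permutation blocks;
    a negative shift marks an all-zero block."""
    rows = [np.hstack([np.zeros((L, L), int) if s < 0 else circulant_permutation(L, s)
                       for s in row])
            for row in shifts]
    return np.vstack(rows)

def coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    G = A / np.linalg.norm(A, axis=0)
    M = np.abs(G.T @ G)
    np.fill_diagonal(M, 0.0)
    return M.max()

H = block_matrix([[0, 1, 2], [0, 2, 4]], L=5)  # 10 x 15, uniform column weight 2
print(H.shape, coherence(H))                   # coherence 1/2 for this toy choice
```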

    Johnson Type Bounds on Constant Dimension Codes

Very recently, an operator channel was defined by Koetter and Kschischang in their study of random network coding. They also introduced constant dimension codes and demonstrated that these codes can be employed to correct errors and/or erasures over the operator channel. Constant dimension codes are equivalent to the so-called linear authentication codes introduced by Wang, Xing and Safavi-Naini when constructing distributed authentication systems in 2003. In this paper, we study constant dimension codes. It is shown that Steiner structures are optimal constant dimension codes achieving the Wang-Xing-Safavi-Naini bound; furthermore, constant dimension codes achieve the Wang-Xing-Safavi-Naini bound if and only if they are certain Steiner structures. We then derive two Johnson type upper bounds, namely I and II, on constant dimension codes; the Johnson type bound II slightly improves on the Wang-Xing-Safavi-Naini bound. Finally, we point out that a family of known Steiner structures is actually a family of optimal constant dimension codes achieving both Johnson type bounds I and II.
Comment: 12 pages, submitted to Designs, Codes and Cryptography
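Constant dimension codes are sets of $k$-dimensional subspaces of $\mathbb{F}_q^n$, so bounds on their size are naturally expressed with Gaussian binomial coefficients; a small exact-arithmetic helper, useful when tabulating Johnson-type bounds, is sketched below.

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n:
    [n, k]_q = prod_{i=0}^{k-1} (q^{n-i} - 1) / (q^{i+1} - 1)."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den  # always an exact integer

print(gaussian_binomial(4, 2, 2))  # 35 two-dimensional subspaces of F_2^4
```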

    Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes

In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if the pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code also lower-bounds the pseudo-weight of a pseudo-codeword of an LDPC code with girth 4, and that this bound is tight if and only if the pseudo-codeword is a real multiple of a codeword. Using these results, we further show that for some LDPC codes there are no minimum pseudo-codewords other than real multiples of minimum codewords. This means that LP decoding for these LDPC codes is asymptotically optimal, in the sense that the ratio of the decoding error probabilities of LP decoding and maximum-likelihood decoding approaches 1 as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are listed to illustrate these results.
Comment: 17 pages, 1 figure
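For reference, the AWGN pseudo-weight of a nonzero nonnegative pseudo-codeword $\omega$ is $\|\omega\|_1^2 / \|\omega\|_2^2$; the tightness statements above are easy to sanity-check numerically, since a real multiple of a 0-1 codeword has pseudo-weight equal to the codeword's Hamming weight.

```python
import numpy as np

def awgn_pseudo_weight(w):
    """AWGN pseudo-weight ||w||_1^2 / ||w||_2^2 of a pseudo-codeword w."""
    w = np.asarray(w, float)
    return np.sum(np.abs(w)) ** 2 / np.sum(w ** 2)

print(awgn_pseudo_weight([1, 1, 0, 1]))        # 3.0, the Hamming weight
print(awgn_pseudo_weight([2, 2, 0, 2]))        # still 3.0: real multiple of a codeword
print(awgn_pseudo_weight([0.5, 1.0, 0, 0.5]))  # ~2.67 for a fractional vector
```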

Sparse signal recovery by $\ell_q$ minimization under restricted isometry property

In the context of compressed sensing, nonconvex $\ell_q$ minimization with $0<q<1$ has been studied in recent years. In this paper, by generalizing the sharp bound of Cai and Zhang for $\ell_1$ minimization, we show that the condition $\delta_{(s^q+1)k} < \frac{1}{\sqrt{s^{q-2}+1}}$ on the \emph{restricted isometry constant} (RIC) guarantees the exact recovery of $k$-sparse signals in the noiseless case and the stable recovery of approximately $k$-sparse signals in the noisy case by $\ell_q$ minimization. This result is more general than the sharp bound for $\ell_1$ minimization when the order of the RIC is greater than $2k$, and it illustrates that $\ell_q$ minimization provides a better approximation to $\ell_0$ minimization than $\ell_1$ minimization does.
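In practice, $\ell_q$ minimization is often approximated by iteratively reweighted least squares (IRLS) in the style of Chartrand and Yin; the sketch below is one such heuristic for the equality-constrained problem and is not the object of the paper's analysis, which concerns the exact minimizer.

```python
import numpy as np

def irls_lq(A, y, q, n_iter=50, eps=1.0):
    """IRLS heuristic for min ||x||_q^q subject to Ax = y, with 0 < q < 1.
    Each iteration solves a weighted least-squares problem in closed form."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # least-norm start
    for _ in range(n_iter):
        d = (x ** 2 + eps) ** (1 - q / 2)      # inverse weights (x_i^2 + eps)^{1-q/2}
        D = np.diag(d)
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        eps = max(eps / 10.0, 1e-12)           # gradually tighten the smoothing
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 6, replace=False)] = rng.standard_normal(6)
x_hat = irls_lq(A, A @ x_true, q=0.5)
```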

    Reconstruction Guarantee Analysis of Binary Measurement Matrices Based on Girth

Binary 0-1 measurement matrices, especially those from coding theory, have recently been introduced to compressed sensing (CS). There is no known general way to efficiently check whether a measurement matrix has the preferred properties, e.g., the restricted isometry property (RIP) or the nullspace property (NSP). Khajehnejad \emph{et al.} used \emph{girth} to certify the good performance of sparse binary measurement matrices. In this paper, we examine the performance of binary measurement matrices with uniform column weight and arbitrary girth under basis pursuit. Explicit sufficient conditions for exact reconstruction, involving only the column weight $\gamma$ and the girth $g$, are obtained; they improve the previous results derived from the RIP for any girth $g$, and the results derived from the NSP when $g/2$ is odd. Moreover, we derive explicit $l_1/l_1$, $l_2/l_1$ and $l_\infty/l_1$ sparse approximation guarantees. These results further show that large girth has a positive impact on the performance of binary measurement matrices under basis pursuit, and that the binary parity-check matrices of good LDPC codes are important candidates for measurement matrices.
Comment: accepted by IEEE ISIT 201
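The simplest girth condition is also the easiest to test: the Tanner graph of $H$ has girth greater than 4 exactly when no two distinct columns of $H$ share support in more than one row. A small check:

```python
import numpy as np

def girth_greater_than_4(H):
    """True iff the Tanner graph of the binary matrix H has no 4-cycles,
    i.e., every pair of distinct columns overlaps in at most one row."""
    H = np.asarray(H, int)
    overlaps = H.T @ H          # entry (i, j): common support of columns i and j
    np.fill_diagonal(overlaps, 0)
    return bool(overlaps.max() <= 1)

H = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
print(girth_greater_than_4(H))  # True; this Tanner graph has girth 6
```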

    Unifying Decision Trees Split Criteria Using Tsallis Entropy

The construction of efficient and effective decision trees remains a key topic in machine learning because of their simplicity and flexibility. Many heuristic algorithms have been proposed to construct near-optimal decision trees. ID3, C4.5 and CART are classical decision tree algorithms, whose split criteria are Shannon entropy, Gain Ratio and the Gini index, respectively. These split criteria appear to be independent; in fact, they can be unified in a Tsallis entropy framework. Tsallis entropy is a generalization of Shannon entropy and provides a new way to enhance decision trees' performance via an adjustable parameter $q$. In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed that unifies Shannon entropy, Gain Ratio and the Gini index, thereby generalizing the split criteria of decision trees. More importantly, we reveal the relations between Tsallis entropy with different $q$ and the other split criteria. Experimental results on UCI data sets indicate that the TEC algorithm achieves statistically significant improvements over the classical algorithms.
Comment: 6 pages, 2 figures
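A compact sketch of the unifying quantity: the Tsallis entropy of the class proportions at a node, which recovers Shannon entropy as $q \to 1$ and the Gini index at $q = 2$ (the Gain Ratio case additionally normalizes by the split entropy, which is omitted here).

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) of class
    proportions p; q -> 1 gives Shannon entropy, q = 2 gives the Gini index."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.0))  # Shannon entropy (natural log)
print(tsallis_entropy(p, 2.0))  # 1 - sum p_i^2 = 0.62, the Gini index
```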

    Maximum Multiflow in Wireless Network Coding

In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear program to compute the maximum throughput and show its superiority over the throughput of networks without coding. Finally, the MMF problem with wireless network coding is shown to be NP-hard, and a polynomial-time approximation algorithm is proposed.
Comment: 5 pages, 3 figures, submitted to ISIT 201
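The paper's LP additionally encodes the coding-aware conflict relations; as a bare skeleton, here is a plain max-throughput LP for a made-up two-relay network using scipy.optimize.linprog (topology, capacities and variable ordering are illustrative assumptions only).

```python
import numpy as np
from scipy.optimize import linprog

# Toy topology: s -> a -> t and s -> b -> t.
# Flow variables ordered as [f_sa, f_at, f_sb, f_bt].
cap = [3.0, 2.0, 4.0, 5.0]                 # per-edge capacities
c = [-1.0, 0.0, -1.0, 0.0]                 # maximize f_sa + f_sb (minimize negative)
A_eq = [[1, -1, 0, 0],                     # flow conservation at relay a
        [0, 0, 1, -1]]                     # flow conservation at relay b
res = linprog(c, A_eq=A_eq, b_eq=[0.0, 0.0],
              bounds=[(0.0, u) for u in cap])
print(-res.fun)  # 6.0 = min(3,2) + min(4,5)
```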